| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| torch-tensorrt | 2.3.0 | Torch-TensorRT is a package that allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch | 2024-06-10 21:29:36 |
| xuelang-Xclient | 0.0.9 | Triton Inference Server client | 2024-06-10 12:12:58 |
| vsrealesrgan | 5.0.0 | Real-ESRGAN function for VapourSynth | 2024-06-10 09:37:31 |
| vsfemasr | 2.0.0 | FeMaSR function for VapourSynth | 2024-06-02 12:11:21 |
| vsrife | 5.1.0 | RIFE function for VapourSynth | 2024-05-30 10:47:14 |
| triton-model-analyzer | 1.40.0 | Triton Model Analyzer is a tool to profile and analyze the runtime performance of one or more models on the Triton Inference Server | 2024-05-25 02:04:41 |
| tritonclient | 2.46.0 | Python client library and utilities for communicating with Triton Inference Server | 2024-05-25 01:48:03 |
| vsdpir | 4.1.0 | DPIR function for VapourSynth | 2024-05-19 06:32:35 |
| triton-model-navigator | 0.9.0 | Triton Model Navigator: an inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton | 2024-05-07 21:51:54 |
| tensorrt-yolo | 3.0.1 | TensorRT-YOLO: supports YOLOv5, YOLOv8, YOLOv9, and PP-YOLOE using TensorRT acceleration with EfficientNMS | 2024-04-23 08:34:08 |
| optimum-nvidia | 0.1.0b6 | Optimum Nvidia is the interface between Hugging Face Transformers and NVIDIA GPUs | 2024-04-11 21:13:38 |